156 research outputs found

    Novel decoupling networks for small antenna arrays

    Ph.D. (Doctor of Philosophy)

    Small Interference RNA Targeting TLR4 Gene Effectively Attenuates Pulmonary Inflammation in a Rat Model

    Objective. The present study aimed to investigate the feasibility of adenovirus-mediated small interfering RNA (siRNA) targeting the Toll-like receptor 4 (TLR4) gene in ameliorating lipopolysaccharide- (LPS-) induced acute lung injury (ALI). Methods. In vitro, alveolar macrophages (AMs) were treated with Ad-siTLR4 or Ad-EGFP for 12 h, 24 h, or 48 h, then with LPS (100 ng/mL) for 2 h, and the function and expression of TLR4 were evaluated. In vivo, rats received an intratracheal injection of 300 μL of normal saline (control group), 300 μL of Ad-EGFP (Ad-EGFP group), or 300 μL of Ad-siTLR4 (Ad-siTLR4 group) and were then treated intravenously with LPS (50 mg/kg) to induce ALI. Results. Ad-siTLR4 treatment significantly reduced TLR4 expression and the production of proinflammatory cytokines following LPS challenge both in vitro and in vivo. Significant alleviation of tissue edema, microvascular protein leakage, and neutrophil infiltration was observed in the Ad-siTLR4-treated animals. Conclusion. TLR4 plays a critical role in LPS-induced ALI, and transfection of Ad-siTLR4 can effectively downregulate TLR4 expression in vitro and in vivo, accompanied by alleviation of LPS-induced lung injury. These findings suggest that TLR4 may serve as a potential target in the treatment of ALI and that RNA interference targeting TLR4 expression represents a promising therapeutic strategy.

    The Devil is the Classifier: Investigating Long Tail Relation Classification with Decoupling Analysis

    Long-tailed relation classification is a challenging problem because the head classes may dominate the training phase, leading to deteriorated performance on the tail classes. Existing solutions usually address this issue via class-balancing strategies, e.g., data re-sampling and loss re-weighting, but all of these methods adhere to the schema of entangled learning of the representation and the classifier. In this study, we conduct an in-depth empirical investigation into the long-tailed problem and find that pre-trained models with instance-balanced sampling already capture well-learned representations for all classes; moreover, better long-tailed classification can be achieved at low cost by adjusting only the classifier. Inspired by this observation, we propose a robust classifier with attentive relation routing, which assigns soft weights by automatically aggregating the relations. Extensive experiments on two datasets demonstrate the effectiveness of the proposed approach. Code and datasets are available at https://github.com/zjunlp/deepke.
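    The decoupling idea described in this abstract (keep a pre-trained, instance-balanced encoder fixed and adjust only the classifier) can be illustrated with a short sketch. The snippet below is a hypothetical PyTorch example, not the authors' implementation: the RoutedClassifier, its attention over learnable relation embeddings, and all dimensions are illustrative stand-ins for the paper's attentive relation routing module (see the released code at https://github.com/zjunlp/deepke for the actual method).

```python
# Illustrative sketch of decoupled long-tailed classification:
# a frozen, pre-trained encoder provides representations, and only a
# lightweight classifier is (re-)trained. The "routing" below is a generic
# soft attention over learned relation embeddings, used purely to convey
# the idea of soft weight assignment; it is NOT the paper's exact module.
import torch
import torch.nn as nn


class RoutedClassifier(nn.Module):
    def __init__(self, hidden_dim: int, num_relations: int):
        super().__init__()
        # one learnable embedding per relation class
        self.relation_emb = nn.Parameter(torch.randn(num_relations, hidden_dim))
        self.query = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, hidden_dim) produced by a frozen encoder
        q = self.query(features)                              # (batch, hidden)
        scores = q @ self.relation_emb.t()                    # (batch, num_relations)
        weights = torch.softmax(scores, dim=-1)               # soft weights over relations
        mixed = weights @ self.relation_emb                   # aggregate relation embeddings
        logits = (features + mixed) @ self.relation_emb.t()   # (batch, num_relations)
        return logits


# Second stage of decoupled training: freeze the encoder, optimize only the classifier.
encoder = nn.Sequential(nn.Linear(128, 128), nn.ReLU())  # stand-in for a pre-trained encoder
for p in encoder.parameters():
    p.requires_grad = False

classifier = RoutedClassifier(hidden_dim=128, num_relations=40)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

x = torch.randn(8, 128)              # dummy batch of encoder inputs
labels = torch.randint(0, 40, (8,))  # dummy relation labels
logits = classifier(encoder(x))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
```

    The point of the sketch is the training split: the encoder's parameters stay frozen, so only the lightweight classifier is re-trained to rebalance head and tail classes.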

    Design of Highly Isolated Compact Antenna Array for MIMO Applications

    To achieve very high data rates in both the uplink and downlink channels, multiple-antenna systems are used in the mobile terminals as well as the base stations of future generations of mobile networks. When implemented on a size-limited platform, multiple-antenna arrays suffer from strong mutual coupling between closely spaced array elements. In this paper, a rigorous procedure for the design of a 4-port compact planar antenna array with high port isolation is presented. The proposed design involves a decoupling network consisting of reactive elements whose values can be obtained by the method of eigenmode analysis. Numerical results show the effectiveness of the proposed design approach in improving the port isolation of a compact four-element planar array.
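    As a rough illustration of the eigenmode idea, the sketch below (Python/NumPy, with a made-up 4x4 admittance matrix and an assumed operating frequency) eigendecomposes the array admittance matrix and sizes one shunt reactive element per eigenmode to cancel its susceptance. This is only a conceptual sketch of eigenmode-based decoupling in general; the paper's actual network topology and design procedure are not reproduced here.

```python
# Conceptual sketch: eigendecompose a coupled array's admittance matrix and
# cancel each eigenmode's susceptance with a single shunt reactive element.
# The 4x4 admittance matrix and the frequency are made up for illustration.
import numpy as np

f = 2.45e9                  # assumed operating frequency (Hz)
omega = 2 * np.pi * f

# Hypothetical mutual admittance matrix of a coupled 4-element array (siemens),
# complex symmetric (reciprocal network) with off-diagonal coupling terms.
Y = np.array([
    [0.020 + 0.005j, 0.004 - 0.003j, 0.001 - 0.001j, 0.004 - 0.003j],
    [0.004 - 0.003j, 0.020 + 0.005j, 0.004 - 0.003j, 0.001 - 0.001j],
    [0.001 - 0.001j, 0.004 - 0.003j, 0.020 + 0.005j, 0.004 - 0.003j],
    [0.004 - 0.003j, 0.001 - 0.001j, 0.004 - 0.003j, 0.020 + 0.005j],
])

# Each eigenvalue is the admittance seen under the corresponding eigenmode
# excitation (columns of eigvecs), which is what a decoupling network lets
# us match one mode at a time.
eigvals, eigvecs = np.linalg.eig(Y)

for k, y_mode in enumerate(eigvals):
    b_needed = -y_mode.imag  # shunt susceptance that cancels the mode susceptance
    if b_needed >= 0:
        print(f"mode {k}: shunt capacitor C = {b_needed / omega:.3e} F")
    else:
        print(f"mode {k}: shunt inductor  L = {1.0 / (omega * -b_needed):.3e} H")
```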

    KoRC: Knowledge oriented Reading Comprehension Benchmark for Deep Text Understanding

    Deep text understanding, which requires connections between a given document and prior knowledge beyond its text, has been highlighted by many benchmarks in recent years. However, these benchmarks suffer from two major limitations. On the one hand, most of them require human annotation of knowledge, which leads to limited knowledge coverage. On the other hand, they usually use choices or spans in the texts as the answers, which results in a narrow answer space. To overcome these limitations, we build a new challenging benchmark named KoRC in this paper. Compared with previous benchmarks, KoRC has two advantages, i.e., broad knowledge coverage and a flexible answer format. Specifically, we utilize massive knowledge bases to guide annotators or large language models (LLMs) to construct knowledgeable questions. Moreover, we use labels in knowledge bases rather than spans or choices as the final answers. We test state-of-the-art models on KoRC, and the experimental results show that the strongest baseline achieves only 68.3% and 30.0% F1 on the in-distribution and out-of-distribution test sets, respectively. These results indicate that deep text understanding remains an unsolved challenge. The benchmark dataset, leaderboard, and baseline methods are released at https://github.com/THU-KEG/KoRC.
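    For context, reading-comprehension benchmarks with free-form answers are typically scored with a token-overlap F1 rather than exact match. The sketch below shows a generic SQuAD-style token F1 in Python; KoRC's official evaluation script may normalize and aggregate differently, so this is only an illustration of how partial credit is computed.

```python
# Minimal sketch of the token-overlap F1 commonly used to score free-form
# reading-comprehension answers (SQuAD-style). KoRC's official evaluation
# may normalize and aggregate differently; this only illustrates how a
# partial-credit F1 rewards predictions that overlap the gold label.
from collections import Counter


def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


# Example: a prediction that partially matches the gold answer gets partial credit.
print(token_f1("Kingdom of Denmark", "Denmark"))   # 0.5
```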